We propose a novel parametric 360° renderable head model built from artist-designed, high-fidelity 3D head models, disentangling facial motion/shape from appearance. It is the first parametric 3D full-head model to achieve 360° free-view synthesis, image-based fitting, appearance editing, and animation within a single model.
ECCV 2024 Project Page Code
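As a rough illustration of the disentanglement described above, the sketch below conditions a geometry branch on identity/expression codes and an appearance branch on a separate appearance code. All module names, dimensions, and layer choices are our own assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ParametricHead(nn.Module):
    """Toy disentangled parametric head: geometry and appearance use separate codes."""
    def __init__(self, n_id=64, n_exp=32, n_app=64, hidden=256):
        super().__init__()
        # Geometry branch: query point + identity/expression codes -> SDF value
        self.geom = nn.Sequential(
            nn.Linear(3 + n_id + n_exp, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))
        # Appearance branch: query point + view direction + appearance code -> RGB
        self.app = nn.Sequential(
            nn.Linear(3 + 3 + n_app, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))

    def forward(self, xyz, view_dir, z_id, z_exp, z_app):
        # xyz, view_dir: (N, 3); z_*: (N, n_*) latent codes (assumed shapes)
        sdf = self.geom(torch.cat([xyz, z_id, z_exp], dim=-1))
        rgb = self.app(torch.cat([xyz, view_dir, z_app], dim=-1))
        return sdf, rgb
```

Editing appearance while keeping motion fixed then amounts to swapping `z_app` while holding `z_id` and `z_exp` constant.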
We present a novel approach for synthesizing 3D talking heads with controllable emotion, enhancing lip synchronization and rendering quality. To address the challenges of multi-view consistency and emotional expressiveness, we propose a 'Speech-to-Geometry-to-Appearance' mapping framework trained on the EmoTalk3D dataset, enabling controllable emotion, wide-range view rendering, and fine facial details.
ECCV 2024 Project Page Code
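The toy sketch below illustrates the 'Speech-to-Geometry-to-Appearance' cascade: one network maps audio features and an emotion label to per-vertex geometry on a template mesh, and a second maps that geometry to per-point appearance features. Vertex counts, feature sizes, and architectures are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Speech2Geometry(nn.Module):
    """Audio features + emotion label -> per-vertex offsets on a template mesh."""
    def __init__(self, n_audio=128, n_emotion=8, n_verts=5023):
        super().__init__()
        self.n_verts = n_verts
        self.net = nn.Sequential(
            nn.Linear(n_audio + n_emotion, 256), nn.ReLU(),
            nn.Linear(256, n_verts * 3))

    def forward(self, audio_feat, emotion_onehot):
        x = torch.cat([audio_feat, emotion_onehot], dim=-1)
        return self.net(x).view(-1, self.n_verts, 3)

class Geometry2Appearance(nn.Module):
    """Deformed geometry -> per-point appearance features (e.g. colors)."""
    def __init__(self, n_feat=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, n_feat))

    def forward(self, verts):
        return self.net(verts)
```

Conditioning the first stage on the emotion label is what makes the emotion controllable at inference time.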
We propose STAG4D, a novel framework for high-quality 4D generation, integrating pre-trained diffusion models with dynamic 3D Gaussian splatting. Our method outperforms prior 4D generation works in rendering quality, spatial-temporal consistency, and generation robustness, setting a new state-of-the-art for 4D generation from diverse inputs, including text, image, and video.
ECCV 2024 Project Page Code
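A minimal sketch of the dynamic 3D Gaussian idea underlying such 4D pipelines: canonical Gaussian parameters plus a time-conditioned deformation MLP whose output displaces the Gaussian centers per frame. This is a generic illustration, not the STAG4D code.

```python
import torch
import torch.nn as nn

class DynamicGaussians(nn.Module):
    """Canonical 3D Gaussians deformed over time by a small MLP."""
    def __init__(self, n_gauss=10000, hidden=128):
        super().__init__()
        self.xyz = nn.Parameter(torch.randn(n_gauss, 3) * 0.1)  # canonical centers
        self.scale = nn.Parameter(torch.zeros(n_gauss, 3))      # log-scales
        self.rot = nn.Parameter(torch.zeros(n_gauss, 4))        # quaternions
        self.rgb = nn.Parameter(torch.rand(n_gauss, 3))
        self.opacity = nn.Parameter(torch.zeros(n_gauss, 1))
        # Deformation MLP: (canonical xyz, t) -> displacement
        self.deform = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def at_time(self, t):
        t_col = torch.full((self.xyz.shape[0], 1), float(t))
        delta = self.deform(torch.cat([self.xyz, t_col], dim=-1))
        return self.xyz + delta  # deformed centers fed to a splatting rasterizer
```

Supervision from a pre-trained multi-view diffusion model would then flow through the rasterized images back into both the canonical Gaussians and the deformation field.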
We introduce a method for animating human images that uses the SMPL 3D human parametric model within a latent diffusion framework to improve shape alignment and motion guidance. By incorporating depth, normal, and semantic maps together with skeleton-based guidance, we enrich the model with detailed 3D shape and pose attributes, fusing them via a multi-layer motion fusion module with self-attention.
ECCV 2024 Project Page Code
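The sketch below shows one plausible form of such a motion fusion module: the stacked SMPL-derived guidance maps are embedded by a convolution, flattened into tokens, and fused with self-attention before being injected into the diffusion UNet. Channel counts and the exact map set are hypothetical.

```python
import torch
import torch.nn as nn

class MotionFusion(nn.Module):
    """Fuse stacked guidance maps (e.g. depth/normal/semantic/skeleton) via self-attention."""
    def __init__(self, n_maps=4, c=64, heads=4):
        super().__init__()
        self.embed = nn.Conv2d(n_maps * 3, c, kernel_size=3, padding=1)
        self.attn = nn.MultiheadAttention(c, heads, batch_first=True)

    def forward(self, maps):                        # maps: (B, n_maps*3, H, W)
        x = self.embed(maps)                        # (B, C, H, W)
        B, C, H, W = x.shape
        tokens = x.flatten(2).transpose(1, 2)       # (B, H*W, C)
        fused, _ = self.attn(tokens, tokens, tokens)
        return fused.transpose(1, 2).view(B, C, H, W)  # guidance for the UNet
```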
We present a novel differentiable point-based rendering framework for material and lighting decomposition from multi-view images, enabling editing, ray tracing, and real-time relighting of the 3D point cloud. Our framework showcases the potential to revolutionize the mesh-based graphics pipeline with a relightable, traceable, and editable rendering pipeline based solely on point clouds.
ECCV 2024 Project Page Code
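To make the decomposition-then-relighting idea concrete, here is a minimal diffuse relighting step over decomposed per-point normals and albedo. The paper's renderer also handles ray tracing and specular terms, which this sketch omits.

```python
import torch

def relight_points(normals, albedo, light_dir, light_rgb):
    """Lambertian relighting of a decomposed point cloud.

    normals:   (N, 3) unit normals recovered per point
    albedo:    (N, 3) decomposed per-point albedo
    light_dir: (3,)   unit vector toward a new directional light
    light_rgb: (3,)   light color/intensity
    """
    ndotl = (normals @ light_dir).clamp(min=0.0).unsqueeze(-1)  # (N, 1)
    return albedo * light_rgb * ndotl                           # diffuse radiance
```

Because every quantity is per-point, swapping `light_dir` or editing `albedo` for a subset of points relights or recolors the scene directly.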
We propose a fully differentiable framework named neural ambient illumination (NeAI), which incorporates Neural Radiance Fields (NeRF) as a physically-based lighting model to handle complex lighting. Our method integrates physically based rendering into NeRF, utilizing roughness-adaptive specular lobe encoding and precise decomposition via the pre-convolved background.
AAAI 2024 Project Page
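Below is a schematic of a roughness-adaptive directional encoding: the view direction is mirrored about the surface normal, and higher roughness exponentially damps the higher-frequency components of the reflected direction's encoding, mimicking a wider specular lobe. The exponential damping form is our assumption, not the paper's exact formulation.

```python
import torch

def reflect(view_dir, normal):
    """Mirror view_dir about normal; both (N, 3) unit vectors."""
    return 2.0 * (view_dir * normal).sum(-1, keepdim=True) * normal - view_dir

def roughness_encoding(refl, roughness, n_freqs=4):
    """refl: (N, 3) reflected directions; roughness: (N, 1) per-point roughness."""
    feats = []
    for k in range(n_freqs):
        freq = 2.0 ** k
        damp = torch.exp(-roughness * freq ** 2)  # rougher -> blurrier lobe
        feats += [damp * torch.sin(freq * refl), damp * torch.cos(freq * refl)]
    return torch.cat(feats, dim=-1)               # (N, 6 * n_freqs)
```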
We propose VividTalk, a two-stage generic framework for generating high-visual-quality talking head videos with accurate lip-sync, expressive facial motion, and natural head poses. Extensive experiments show that VividTalk enhances lip synchronization and visual realism by a large margin.
3DV 2025 Project Page
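As a rough illustration of the first stage of such a two-stage pipeline, the stub below maps an audio feature sequence to blendshape weights and head pose, which a second stage would render into video frames. The architecture, dimensions, and intermediate representation are assumptions, not VividTalk's implementation.

```python
import torch
import torch.nn as nn

class AudioToMesh(nn.Module):
    """Stage 1 (illustrative): audio features -> blendshape weights + head pose."""
    def __init__(self, n_audio=128, n_blend=52):
        super().__init__()
        self.temporal = nn.GRU(n_audio, 128, batch_first=True)
        self.blend = nn.Linear(128, n_blend)  # per-frame expression blendshapes
        self.pose = nn.Linear(128, 6)         # per-frame rotation + translation

    def forward(self, audio_seq):             # audio_seq: (B, T, n_audio)
        h, _ = self.temporal(audio_seq)
        return self.blend(h), self.pose(h)
```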
We present FaceScape, a large-scale detailed 3D face dataset, and a corresponding benchmark for evaluating single-view facial 3D reconstruction. Training on FaceScape data, we propose a novel algorithm that predicts elaborate, riggable 3D face models from a single image input.
TPAMI 2024 Code & Dataset
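A riggable face model of this kind is commonly expressed bilinearly: a core tensor contracted with identity and expression coefficients yields the mesh vertices, so re-rigging means swapping the expression coefficients. The toy sketch below, with deliberately tiny sizes and a random stand-in core, shows only the decoding step.

```python
import numpy as np

# Deliberately tiny, illustrative sizes; a real core tensor would be learned.
n_verts, n_id, n_exp = 1000, 50, 52
core = np.random.randn(n_verts * 3, n_id, n_exp).astype(np.float32)

def decode(id_coeff, exp_coeff):
    """id_coeff: (n_id,), exp_coeff: (n_exp,) -> (n_verts, 3) mesh vertices."""
    flat = np.einsum('vie,i,e->v', core, id_coeff, exp_coeff)
    return flat.reshape(n_verts, 3)

# Same identity, new expression: change exp_coeff only.
verts = decode(np.random.randn(n_id), np.random.randn(n_exp))
```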
We propose a novel method, AvatarBooth, for generating high-quality 3D avatars from text prompts or photos. Unlike previous approaches based only on text descriptions, our method enables users to customize the generated avatars according to casually captured photos of the face or full body.
arXiv 2023 Project Page
We present LoD-NeuS, an efficient neural representation for high-frequency geometry detail recovery and anti-aliased novel view rendering. A multi-scale tri-plane scene representation is introduced to capture the level of detail (LoD) of the signed distance function (SDF) and the spatial radiance.
SIGGRAPH Asia 2023 Conf.
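To illustrate multi-scale tri-plane sampling, the sketch below projects each 3D query point onto the XY/XZ/YZ planes, bilinearly samples a feature from each plane, and concatenates features across planes and resolution levels before they would feed the SDF network. Shapes and the level handling are assumptions, not the LoD-NeuS code.

```python
import torch
import torch.nn.functional as F

def sample_triplane(planes, xyz):
    """planes: list of 3 tensors (1, C, R, R) for the XY/XZ/YZ planes;
    xyz: (N, 3) points in [-1, 1]. Returns (N, 3*C) concatenated features."""
    coords = [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]
    feats = []
    for plane, uv in zip(planes, coords):
        grid = uv.view(1, -1, 1, 2)                          # (1, N, 1, 2)
        f = F.grid_sample(plane, grid, align_corners=True)   # (1, C, N, 1)
        feats.append(f.squeeze(0).squeeze(-1).t())           # (N, C)
    return torch.cat(feats, dim=-1)

def multiscale_features(pyramids, xyz):
    """pyramids: tri-plane sets at increasing resolution (coarse -> fine)."""
    return torch.cat([sample_triplane(p, xyz) for p in pyramids], dim=-1)
```

Coarse planes capture smooth structure while fine planes add the high-frequency detail, which is what lets the SDF represent geometry at multiple levels of detail.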